14 research outputs found

    Real-time visual perception : detection and localisation of static and moving objects from a moving stereo rig

    Get PDF
    We present a novel method for scene reconstruction and moving-object detection and tracking, using extensive point tracking (typically more than 4000 points per frame) over time. The current neighbourhood is reconstructed as a 3D point cloud, which allows for extra features (ground detection, path planning, obstacle detection). The reconstruction framework takes moving objects into account, and tracking over time allows for trajectory and speed estimation.
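
    A rough, self-contained sketch of the triangulation step such a pipeline relies on (not the authors' implementation): tracked stereo correspondences are converted into a 3D point cloud, and per-point speeds are derived by finite differences. The rectified-camera parameters and all function names are assumptions made for the example.

```python
import numpy as np

def triangulate_rectified(pts_left, pts_right, fx, fy, cx, cy, baseline):
    """Triangulate matched points from a rectified stereo pair.

    pts_left, pts_right: (N, 2) pixel coordinates of the same tracked
    features in the left and right images (illustrative inputs).
    Returns an (N, 3) point cloud in the left-camera frame.
    """
    disparity = pts_left[:, 0] - pts_right[:, 0]      # u_left - u_right
    disparity = np.clip(disparity, 1e-6, None)        # guard against zero disparity
    z = fx * baseline / disparity                     # depth
    x = (pts_left[:, 0] - cx) * z / fx
    y = (pts_left[:, 1] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def point_speeds(cloud_prev, cloud_curr, dt):
    """Finite-difference speed of each tracked 3D point between two frames."""
    return np.linalg.norm(cloud_curr - cloud_prev, axis=1) / dt
```

    After compensating for the rig's own motion, points whose estimated speed remains above a threshold would be natural candidates for the moving-object detection and trajectory estimation described above.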

    Extended occupation grids for non-rigid moving objects tracking

    Get PDF
    We present an evolution of the traditional occupancy grid algorithm, based on an extensive probabilistic calculus of the evolution of several variables over a cell neighbourhood. Occupancy, speed and classification are taken into account, the aim being to improve the overall perception of a highly changing, unstructured environment. Contrary to classical SLAM algorithms, no requirement is placed on the rigidity of the scene, and tracking does not rely on geometric characteristics. We believe this could have important applications in the automotive field, both for autonomous vehicles and driver assistance, in areas that are difficult to address with current algorithms. This article begins with a general presentation of our goal, along with considerations on the limits of traditional occupancy grids and the reasons behind them. We then present our proposal and detail some of its key aspects, namely the update rules and their performance consequences. A second, more practical part begins with a brief presentation of the GPU implementation of the algorithm, before turning to sensor models and some results.
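
    The update rules are the core of the paper; the sketch below only conveys the general flavour of a prediction/update cycle on such a grid, with a velocity-driven shift of occupancy, a small diffusion over the 4-neighbourhood, and a binary Bayes measurement update. The per-cell velocity state, the `diffusion` parameter and all names are assumptions for the example, not the paper's actual formulation.

```python
import numpy as np

def predict(occ, vx, vy, dt, cell_size, diffusion=0.05):
    """Propagate per-cell occupancy along a per-cell velocity estimate.

    occ:    (H, W) occupancy probabilities
    vx, vy: (H, W) per-cell velocity estimates in m/s (illustrative state)
    A small diffusion term spreads mass to the 4-neighbourhood to model
    uncertainty about non-rigid motion.
    """
    h, w = occ.shape
    shifted = np.zeros_like(occ)
    ys, xs = np.mgrid[0:h, 0:w]
    # move each cell's occupancy to the cell its velocity points to
    ty = np.clip(np.round(ys + vy * dt / cell_size).astype(int), 0, h - 1)
    tx = np.clip(np.round(xs + vx * dt / cell_size).astype(int), 0, w - 1)
    np.add.at(shifted, (ty, tx), occ)
    # diffuse a fraction of the mass to the 4-neighbourhood
    blur = (np.roll(shifted, 1, 0) + np.roll(shifted, -1, 0)
            + np.roll(shifted, 1, 1) + np.roll(shifted, -1, 1)) / 4.0
    return np.clip((1 - diffusion) * shifted + diffusion * blur, 0.0, 1.0)

def update(occ_prior, p_hit, p_false):
    """Binary Bayes update: p_hit = P(measurement | occupied), p_false = P(measurement | free)."""
    odds = (occ_prior / (1 - occ_prior + 1e-9)) * (p_hit / (p_false + 1e-9))
    return odds / (1 + odds)
```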

    Proposition for propagated occupation grids for non-rigid moving objects tracking

    Get PDF
    Autonomous navigation among humans is, however simple it might seem, a difficult subject which draws a lot of attention in these days of increasingly autonomous systems. In a typical scene from a human environment, diverse shapes, behaviours, speeds or colours can be captured by many sensors; a generic means to perceive space and dynamics is all the more needed, if not easy to obtain. We propose an incremental evolution of the well-known occupancy grid paradigm, introducing grid-cell propagation over time and a limited neighbourhood, handled by probabilistic calculus. Our algorithm runs in real time on a GPU implementation, and handles space-cell propagation in a completely generic way, without any a priori requirements. It produces a set of belief maps of the environment, covering occupancy, but also item dynamics, relative rigidity links, and an initial object classification. Observations from free-space sensors are thus turned into the information needed for autonomous navigation.
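
    As a concrete illustration of propagation over a limited neighbourhood (an illustrative reduction, not the algorithm itself), the sketch below weights the 3x3 neighbours of a cell by how well each offset matches the displacement predicted by the cell's velocity estimate, then spreads occupancy and class belief accordingly. The Gaussian compatibility kernel, the state layout and the parameter names are assumptions.

```python
import numpy as np

def propagation_weights(v_cell, dt, cell_size, sigma=0.5):
    """Probability of a cell's content ending up in each of its 3x3 neighbours,
    given its velocity estimate v_cell = (vx, vy): Gaussian compatibility
    between the predicted displacement (in cells) and each neighbour offset."""
    offsets = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)], float)
    predicted = np.array([v_cell[1], v_cell[0]]) * dt / cell_size   # (dy, dx) in cells
    d2 = np.sum((offsets - predicted) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()

def propagate_cell(occ, class_belief, v_cell, dt, cell_size):
    """Spread one cell's occupancy mass and class belief (a categorical
    distribution over a few coarse classes) onto its 3x3 neighbourhood."""
    w = propagation_weights(v_cell, dt, cell_size)      # (9,) weights, sum to 1
    occ_out = occ * w                                   # occupancy mass per neighbour
    class_out = np.outer(w, np.asarray(class_belief))   # (9, n_classes)
    return occ_out, class_out
```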

    Track-to-Track Fusion Using Split Covariance Intersection Filter-Information Matrix Filter (SCIF-IMF) for Vehicle Surrounding Environment Perception

    Get PDF
    Vehicle surrounding environment perception is an important process for many applications. Nowadays, the tendency is to incorporate redundant and complementary sensors into an intelligent vehicle in order to enhance its perception ability; an essential issue then arises naturally: what fusion architecture can be used to combine the data from multiple sensors? In this paper, we propose a new track-to-track fusion architecture using the split covariance intersection filter-information matrix filter (SCIF-IMF). The basic idea is to use the IMF (adapted for estimates in split form) to handle the temporal correlation of each sensor system's tracks, and the SCIF to handle track spatial correlation. The proposed architecture enjoys complete sensor modularity and thus enables flexible self-adjustment. A simulation-based comparative study is presented, which shows that the track-to-track fusion architecture using the SCIF-IMF can achieve performance comparable to that of a centralized architecture.
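
    For context, the sketch below shows the classical covariance intersection step that split-CI schemes build on: two track estimates with unknown cross-correlation are fused by searching for the mixing weight omega that minimises the trace of the fused covariance. This is a generic building block, not the SCIF-IMF architecture of the paper; the grid search and the example values are assumptions.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-correlation by covariance
    intersection: pick omega in (0, 1) minimising the trace of the fused
    covariance. Classical CI, not the split variant of the paper."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for omega in np.linspace(1e-3, 1 - 1e-3, n_grid):
        P = np.linalg.inv(omega * P1_inv + (1 - omega) * P2_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (omega * P1_inv @ x1 + (1 - omega) * P2_inv @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Illustrative usage: two tracks of the same object from different sensors.
xa, Pa = np.array([10.0, 1.0]), np.diag([1.0, 0.5])
xb, Pb = np.array([10.4, 0.8]), np.diag([0.6, 0.9])
x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
```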

    Detection, localisation and tracking of obstacles and moving objects, from a stereovision setup

    No full text
    This PhD work is set within the context of autonomous vehicle perception, in which detecting and localising the fixed and moving elements of the surroundings in real time is an obvious requirement. Subsequent perception needs are manifold, from localisation to obstacle detection, and are the subject of continued research interest. The goal of this work is to build, in real time and from stereovision acquisition, a 3D map of the surroundings, while detecting and tracking moving objects. Interest point selection and tracking in image space are a first step, which we initiate with a thorough comparison of detectors from the literature. For tracking, we propose a massively parallel implementation of the standard KLT algorithm, using redundant tracking to provide a reliable quality estimate. This allows us to track thousands of points in real time, which compares favourably to the state of the art. The next step is ego-motion estimation, along with the positioning of tracked points in 3D space. We first propose an iterative variant of the well-known "SVD" procedure, followed by UKF filtering, which allows for a very fast and reliable estimation. The position of every tracked interest point is then filtered on the fly over time, taking its possible mobility into account, in contrast to most dense approaches from the literature. We finally propose a segmentation of moving objects in the augmented position-speed space, made possible by our continuous estimation of feature point positions. Target tracking and filtering finally rely on a GM-PHD approach.
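
    The "SVD" procedure mentioned above is, in its classical form, the closed-form least-squares rigid transform between two matched 3D point sets; the thesis builds a robust, iterative variant on top of it, followed by UKF filtering. The sketch below shows only the classical closed-form step, as a point of reference.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Closed-form least-squares rigid transform: find R, t minimising
    sum ||R @ p_i + t - q_i||^2 over matched (N, 3) point sets P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```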
